Conversation

@gmac (Contributor) commented Aug 31, 2025

This adds a hook into the execution runtime that allows an evaluated selection to exit with its resolved inner value. This allows a breadth-first execution process to run a single generation of resolvers at a time across a set of objects, e.g.:

[diagram: breadth-first execution]

This general breadth capability relies on two runtime methods:

  • evaluate_selection(result_key, ast_nodes, selections_result)
  • exit_with_inner_result?(inner_result, result_name, selection_result)

The expected capabilities are covered by a simple example of a breadth-based runtime in tests. This is still an experimental workflow, so it seems worthwhile to keep the new code footprint minimal for the time being (just exit_with_inner_result?) and in a state that can be backed out easily.
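The breadth idea above can be illustrated with a minimal plain-Ruby sketch (hypothetical names, not the graphql-ruby API): rather than resolving each object's selections depth-first, one generation of a resolver runs across every object before descending.

```ruby
# Plain-Ruby illustration of breadth-first execution (hypothetical; not the
# graphql-ruby API). One "generation" of a resolver runs across all objects
# at once, and the collected inner results feed the next generation.
def evaluate_breadth(objects, fields)
  results = Array.new(objects.size) { {} }
  fields.each do |result_key, resolver|
    # One generation: resolve this field for every object before descending.
    objects.each_with_index do |object, i|
      results[i][result_key] = resolver.call(object)
    end
  end
  results
end

posts = [{ title: "A", author_id: 1 }, { title: "B", author_id: 2 }]
fields = {
  "title"    => ->(post) { post[:title] },
  "authorId" => ->(post) { post[:author_id] },
}
evaluate_breadth(posts, fields)
# => [{"title"=>"A", "authorId"=>1}, {"title"=>"B", "authorId"=>2}]
```

The point of the grain here is that a dataloader (or lazy resolution) can batch across the whole generation instead of across one object's subtree.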

In the future, we'd ideally formalize a structure like this SimpleBreadthRuntime as a supported library feature. The actual breadth prototype I have in the works uses this same basic design while caching breadth objects across fields and consolidating the lazy hooks across scopes. It appears capable of passing some existing test suites while running slightly faster than native depth execution.

```ruby
end

@dataloader.run
GraphQL::Execution::Interpreter::Resolve.resolve_each_depth(@lazies_at_depth, @dataloader)
```
@gmac (Contributor, Author) commented on these lines:

In the actual prototype, I'm running dataloader inline with the field, but this lazy step runs out-of-band after a bunch of fields have been queued. Sounds like with #5422, I'd just run dataloader once in that out-of-band position?


```ruby
def evaluate_breadth_selection(objects, parent_type, node)
  result_key = node.alias || node.name
  @breadth_results_by_key[result_key] = Array.new(objects.size)
```
@gmac (Contributor, Author) commented on these lines:

In the actual prototype, breadth results are keyed with a unique representation of a field path.
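A hedged sketch of what keying by field path might look like (the helper and key format here are hypothetical, not the prototype's actual representation): using a unique path instead of the bare result key prevents same-named fields at different query positions from colliding.

```ruby
# Hypothetical sketch: key breadth results by a unique field-path string
# rather than by result key alone, so that fields with the same alias/name
# at different positions in the query do not collide.
def field_path_key(parent_path, node_alias, node_name)
  "#{parent_path}.#{node_alias || node_name}"
end

breadth_results_by_key = {}
objects = [:post_a, :post_b, :post_c]

key = field_path_key("query.posts", nil, "author")
breadth_results_by_key[key] = Array.new(objects.size)

key # => "query.posts.author"
breadth_results_by_key[key].size # => 3
```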

@gmac force-pushed the gmac--cardinal-runtime-shim branch from dd72284 to a221c9a on August 31, 2025 at 20:26.
@rmosolgo (Owner) commented Sep 2, 2025

I'm game to merge this as-is with the caveat that the integration point for this customization will change in the near future. I expect to continue modifying execution code with ideas from #5389, specifically using a queue instead of recursive method calls and merging GraphQLResult objects into GraphQL::Schema::Object. So although the details will change, I can commit to supporting this kind of possibility in the future 👍

@gmac (Contributor, Author) commented Sep 2, 2025

> I expect to continue modifying execution code with ideas from #5389, specifically using a queue instead of recursive method calls and merging GraphQLResult objects into GraphQL::Schema::Object. So although the details will change, I can commit to supporting this kind of possibility in the future 👍

This is an interesting prospect. What's the grain of the queued work? Are you queuing an object+field at a time, then having the ejected result feed the next enqueued generation? If so, this sounds like an excellent design for a formal integration. If a breadth executor could enqueue a whole bunch of values for a field and get all the inner results back out, that gets us exactly what we need.
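The object+field grain described above can be sketched in plain Ruby (hypothetical structure, not graphql-ruby's actual queue design): a breadth executor enqueues a whole generation of values for one field, drains the queue, and each resolved inner value is handed back to a callback that could enqueue the next generation.

```ruby
# Hypothetical sketch of a queue-based grain: each unit of work is an
# (object, field) pair, and the resolved inner value is fed to a callback
# that can enqueue the next generation. Not graphql-ruby's real design.
Work = Struct.new(:object, :field, :callback)

queue = []
results = []

resolvers = { double: ->(n) { n * 2 } }

# A breadth executor enqueues a whole generation of values for one field...
[1, 2, 3].each do |n|
  queue << Work.new(n, :double, ->(inner) { results << inner })
end

# ...then the runtime drains the queue, handing each inner result back out.
until queue.empty?
  work = queue.shift
  inner = resolvers[work.field].call(work.object)
  work.callback.call(inner)
end

results # => [2, 4, 6]
```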

@gmac (Contributor, Author) commented Sep 2, 2025

> and merging GraphQLResult objects into GraphQL::Schema::Object.

I thoroughly support this, as it amounts to a substantial reduction in allocations. Any allocation that scales 1:1 with response objects (or worse, with fields) should be treated as a premium. In Cardinal, I believe the only 1:1 response allocation we make is the response tree itself, which is unavoidable.
